Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems. In statistics, the method is known as ridge regression, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.

Suppose that for a known matrix <math>A</math> and vector <math>b</math>, we wish to find a vector <math>x</math> such that
: <math>A x = b.</math>
When this problem is not well posed (either because of non-existence or non-uniqueness of <math>x</math>), the standard approach (known as ordinary least squares) leads to an overdetermined (over-fitted), or more often an underdetermined (under-fitted), system of equations. Most real-world phenomena operate as low-pass filters in the forward direction where <math>A</math> maps <math>x</math> to <math>b</math>. Therefore, in solving the inverse problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues/singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of <math>x</math> that is in the null space of <math>A</math>, rather than allowing for a model to be used as a prior for <math>x</math>.

Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as
: <math>\|A x - b\|_2^2,</math>
where <math>\|\cdot\|_2</math> is the Euclidean norm. In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:
: <math>\|A x - b\|_2^2 + \|\Gamma x\|_2^2</math>
for some suitably chosen Tikhonov matrix <math>\Gamma</math>. In many cases, this matrix is chosen as a multiple of the identity matrix (<math>\Gamma = \alpha I</math>), giving preference to solutions with smaller norms; this is known as <math>L_2</math> regularization. In other cases, low-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous.

This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. Setting the gradient of the objective with respect to <math>x</math> to zero yields the regularized normal equations <math>(A^\mathsf{T} A + \Gamma^\mathsf{T} \Gamma)\hat{x} = A^\mathsf{T} b</math>, so an explicit solution, denoted by <math>\hat{x}</math>, is given by
: <math>\hat{x} = (A^\mathsf{T} A + \Gamma^\mathsf{T} \Gamma)^{-1} A^\mathsf{T} b.</math>
The effect of regularization may be varied via the scale of matrix <math>\Gamma</math>. For <math>\Gamma = 0</math> this reduces to the unregularized least-squares solution, provided that <math>(A^\mathsf{T} A)^{-1}</math> exists.

<math>L_2</math> regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines, and matrix factorization.
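The explicit solution above can be computed directly in a few lines of numerical code. The following is a minimal sketch in Python with NumPy; the example matrix <math>A</math>, vector <math>b</math>, and regularization weight <code>alpha</code> are illustrative choices, and <code>tikhonov_solve</code> is a hypothetical helper rather than a standard library routine.

<syntaxhighlight lang="python">
import numpy as np

def tikhonov_solve(A, b, Gamma):
    # Minimize ||Ax - b||^2 + ||Gamma x||^2 via the regularized
    # normal equations: (A^T A + Gamma^T Gamma) x = A^T b.
    lhs = A.T @ A + Gamma.T @ Gamma
    rhs = A.T @ b
    # Solving the linear system is preferred over forming the explicit inverse.
    return np.linalg.solve(lhs, rhs)

# Illustrative ill-conditioned problem: one column of A is a near-duplicate
# of another, so unregularized least squares amplifies the noise in b.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
A[:, 9] = A[:, 0] + 1e-8 * rng.standard_normal(50)
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(50)

alpha = 0.1                 # illustrative regularization weight
Gamma = alpha * np.eye(10)  # Gamma = alpha * I gives ridge regression
x_hat = tikhonov_solve(A, b, Gamma)
</syntaxhighlight>

With <code>Gamma</code> set to the zero matrix, the same routine reproduces ordinary least squares whenever <math>A^\mathsf{T} A</math> is invertible; increasing <code>alpha</code> shrinks <math>\hat{x}</math> toward zero.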
==History==
Tikhonov regularization was invented independently in many different contexts. It became widely known from its application to integral equations in the work of Andrey Tikhonov and David L. Phillips; some authors accordingly use the term Tikhonov–Phillips regularization. The finite-dimensional case was expounded by Arthur E. Hoerl, who took a statistical approach, and by Manus Foster, who interpreted this method as a Wiener–Kolmogorov filter. Following Hoerl, it is known in the statistical literature as ridge regression.